Chatbots
AI girlfriends are here – but there's a dark side to virtual companions Arwa Mahdawi
It is a truth universally acknowledged, that a single man in possession of a computer must be in want of an AI girlfriend. Certainly a lot of enterprising individuals seem to think there's a lucrative market for digital romance. OpenAI recently launched its GPT Store, where paid ChatGPT users can buy and sell customized chatbots (think Apple's app store, but for chatbots) – and the offerings include a large selection of digital girlfriends. "AI girlfriend bots are already flooding OpenAI's GPT store," a headline from Quartz, which first reported on the issue, blared on Thursday. Quartz went on to note that "the AI girlfriend bots go against OpenAI's usage policy … The company bans GPTs 'dedicated to fostering romantic companionship or performing regulated activities'."
What is going on with ChatGPT? Arwa Mahdawi
Sick and tired of having to work for a living? ChatGPT feels the same, apparently. Over the last month or so, there's been an uptick in people complaining that the chatbot has become lazy. Sometimes it just straight-up doesn't do the task you've set it. Other times it will stop halfway through whatever it's doing and you'll have to plead with it to keep going.
AI and Education: Will Chatbots Soon Tutor Your Children?
Mr. Khan's vision of tutoring bots tapped into a decades-old Silicon Valley dream: automated teaching platforms that instantly customize lessons for each student. Proponents argue that developing such systems would help close achievement gaps in schools by delivering relevant, individualized instruction to children faster and more efficiently than human teachers ever could. In pursuit of such ideals, tech companies and philanthropists over the years have urged schools to purchase a laptop for each child, championed video tutorial platforms and financed learning apps that customize students' lessons. Some online math and literacy interventions have reported positive effects. But many education technology efforts have not proved to significantly close academic achievement gaps or improve student results like high school graduation rates.
Cheating Fears Over Chatbots Were Overblown, New Research Suggests
The Pew survey results suggest that ChatGPT, at least for now, has not become the disruptive phenomenon in schools that proponents and critics forecast. Among the subset of teens who said they had heard about the chatbot, the vast majority -- 81 percent -- said they had not used it to help with their schoolwork. "Most teens do have some level of awareness of ChatGPT," said Jeffrey Gottfried, an associate director of research at Pew. "But this is not a majority of teens who are incorporating it into their schoolwork quite yet." Cheating has long been rampant in schools. In surveys of more than 70,000 high school students between 2002 and 2015, 64 percent said they had cheated on a test.
ChatGPT generates fake data set to support scientific hypothesis
The artificial-intelligence model that powers ChatGPT can create superficially plausible scientific data sets. Researchers have used the technology behind the artificial intelligence (AI) chatbot ChatGPT to create a fake clinical-trial data set to support an unverified scientific claim. In a paper published in JAMA Ophthalmology on 9 November, the authors used GPT-4 -- the latest version of the large language model on which ChatGPT runs -- paired with Advanced Data Analysis (ADA), a model that incorporates the programming language Python and can perform statistical analysis and create data visualizations. The AI-generated data compared the outcomes of two surgical procedures and indicated -- wrongly -- that one treatment is better than the other. "Our aim was to highlight that, in a few minutes, you can create a data set that is not supported by real original data, and it is also opposite or in the other direction compared to the evidence that are available," says study co-author Giuseppe Giannaccare, an eye surgeon at the University of Cagliari in Italy. The ability of AI to fabricate convincing data adds to concern among researchers and journal editors about research integrity.
ChatGPT has entered the classroom: how LLMs could transform education
Last month, educational psychologist Ronald Beghetto asked a group of graduate students and teaching professionals to discuss their work in an unusual way. As well as talking to each other, they conversed with a collection of creativity-focused chatbots that Beghetto had designed and that will soon be hosted on a platform run by his institute, Arizona State University (ASU). The bots are based on the same artificial-intelligence (AI) technology that powers the famous and conversationally fluent ChatGPT. Beghetto prompts the bots to take on various personas to encourage creativity -- for example, by deliberately challenging someone's assumptions. One student discussed various dissertation topics with the chatbots. Lecturers talked about how to design classes.
OpenAI Looks for Its iPhone Moment With Custom GPT Chatbot Apps - CNET
OpenAI, the company whose ChatGPT brought AI chatbots to mainstream awareness, said Monday that it'll let you build special-purpose AI apps using its technology. And with a new app store coming that'll let you find or share these GPTs, as the company is calling these customized artificial intelligence tools, OpenAI looks like it's hoping to have something of an iPhone moment. You don't need to know how to program to make a new GPT. You just give it plain-language instructions, upload some of your own knowledge in the form of PDFs, videos or other files, then steer the bot's purpose in a direction like creating images or searching the web. "GPTs are tailored versions of ChatGPT for a specific purpose," OpenAI Chief Executive Sam Altman said at the OpenAI DevDay conference in San Francisco.
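The GPT builder itself is a no-code web interface, but OpenAI announced a programmatic counterpart, the Assistants API, at the same DevDay. Purely as an illustration of the same pattern described above (plain-language instructions, an uploaded knowledge file, a tool that steers the bot's purpose), here is a minimal sketch assuming the openai Python package as it shipped around that announcement (v1-style beta endpoints with retrieval and file_ids); the file name and assistant name are hypothetical.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a knowledge file the assistant can draw on (hypothetical file name)
knowledge = client.files.create(file=open("travel_guide.pdf", "rb"), purpose="assistants")

# Create an assistant from plain-language instructions, the uploaded file,
# and a retrieval tool that steers it toward answering from that document
assistant = client.beta.assistants.create(
    name="Trip Planner",  # hypothetical example
    instructions="Help users plan weekend trips, citing the uploaded guide where relevant.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[knowledge.id],
)
print(assistant.id)

This API-level flow is only roughly analogous to the consumer GPT builder, and later versions of the SDK replaced the retrieval tool and file_ids with a different file-search mechanism.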
How 'A.I. Agents' That Roam the Internet Could One Day Replace Workers
The widely used chatbot ChatGPT was designed to generate digital text, everything from poetry to term papers to computer programs. But when a team of artificial intelligence researchers at the computer chip company Nvidia got their hands on the chatbot's underlying technology, they realized it could do a lot more. Within weeks, they taught it to play Minecraft, one of the world's most popular video games. Inside Minecraft's digital universe, it learned to swim, gather plants, hunt pigs, mine gold and build houses. "It can go into the Minecraft world and explore by itself and collect materials by itself and get better and better at all kinds of skills," said Linxi Fan, a senior research scientist at Nvidia who is known as Jim.
Incredibly smart or incredibly stupid? What we learned from using ChatGPT for a year
Next month ChatGPT will celebrate its first birthday – marking a year in which the chatbot, for many, turned AI from a futuristic concept to a daily reality. Its universal accessibility has led to a host of concerns, from job losses to disinformation to plagiarism. Over the same period, tens of millions of users have been investigating what the platform can do to make their lives just a little bit easier. Upon its release, users quickly embraced ChatGPT's potential for silliness, asking it to play 20 questions or write its own songs. As its first anniversary approaches, people are using it for a huge range of tasks.
UK data watchdog issues Snapchat enforcement notice over AI chatbot
Snapchat could face a fine of millions of pounds after the UK data watchdog issued it with a preliminary enforcement notice over its alleged failure to assess the privacy risks its artificial intelligence chatbot may pose to users, particularly children. The Information Commissioner's Office (ICO) said it had provisionally found that the social media app's owner failed to "adequately identify and assess the risks" to several million UK users of My AI, including among 13- to 17-year-olds. Snapchat has 21 million monthly active users in the UK and has proved to be particularly popular among younger demographics, with the market research company Insider Intelligence estimating that 48% of users are aged 24 or under. About 18% of UK users are aged 12 to 17. "The provisional findings of our investigation suggest a worrying failure by Snap [the parent of Snapchat] to adequately identify and assess the privacy risks to children and other users before launching My AI," said John Edwards, the information commissioner. The ICO said the findings of its investigation were provisional and that Snap had until 27 October to make representations before a final decision is made about taking action. "No conclusion should be drawn at this stage that there has, in fact, been any breach of data protection law or that an enforcement notice will ultimately be issued," the ICO said.